As a maximal goal, I might seek to test my theories about detecting generalizable human values (like reciprocity and benevolence) by programming an alife simulation: a toy-model version of agentic interaction and world-model agreement/interoperability, probed through the fine structure of the simulated agents.
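As a rough illustration of what the smallest version of that simulation might look like, here is a minimal sketch. Everything in it (the ToyAgent class, the resource-sharing round, the generosity and world-model parameters, the agreement/reciprocity proxies) is an assumption I'm making for illustration, not something the proposal above commits to.

```python
import random
from dataclasses import dataclass

# Minimal sketch (illustrative only): two agents each hold a tiny "world model"
# (an estimate of the environment's payoff rate) and repeatedly play a
# resource-sharing round. We track (a) how closely their world models converge
# and (b) a crude reciprocity proxy.

@dataclass
class ToyAgent:
    generosity: float           # baseline fraction of resources shared each round
    world_model: float = 0.5    # estimate of the environment's true payoff rate
    received_last: float = 0.0  # what the partner shared last round

    def act(self, resources: float) -> float:
        # Reciprocate: share more when the partner shared more last round.
        return resources * min(1.0, self.generosity + 0.5 * self.received_last)

    def update_model(self, observed_payoff: float, lr: float = 0.1) -> None:
        # Nudge the world model toward the observed payoff.
        self.world_model += lr * (observed_payoff - self.world_model)


def run_toy_sim(steps: int = 200, true_rate: float = 0.7, seed: int = 0):
    rng = random.Random(seed)
    a, b = ToyAgent(generosity=0.3), ToyAgent(generosity=0.2)
    for _ in range(steps):
        # Each agent harvests a noisy amount of resource from the environment.
        harvest_a = true_rate + rng.gauss(0, 0.1)
        harvest_b = true_rate + rng.gauss(0, 0.1)
        share_a, share_b = a.act(harvest_a), b.act(harvest_b)
        a.received_last, b.received_last = share_b, share_a
        # Both agents observe their total payoff and update their world models.
        a.update_model(harvest_a - share_a + share_b)
        b.update_model(harvest_b - share_b + share_a)
    model_gap = abs(a.world_model - b.world_model)       # world-model agreement proxy
    reciprocity = min(a.received_last, b.received_last)  # crude reciprocity proxy
    return model_gap, reciprocity


if __name__ == "__main__":
    gap, recip = run_toy_sim()
    print(f"world-model gap: {gap:.3f}, reciprocity proxy: {recip:.3f}")
```

Even at this toy scale, the interesting question would be whether proxies like these track anything value-like, which is exactly the kind of thing I'm unsure can be pinned down in six weeks.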
Do you think you will be able to do this in the next 6 weeks? Might be worth scaling this down to “start a framework to test my theories” or something like that
Almost certainly this is way too ambitious for me to do, but I don’t know what “starting a framework” would look like. I guess I don’t have as full an understanding as I’d like of what MATS expects me to come up with, or what’s in-bounds. I’d want to come up with a paper or something out of this, but I’m also not confident in my ability to (for instance) fully specify the missing pieces of John’s model, or even just one of them.